This paper presents the challenges of creating and managing large parallel corpora of 12 major Indian languages (soon to be extended to 23 languages) as part of a major consortium project funded by the Department of Information Technology (DIT), Government of India, and running in parallel at 10 different universities across India. To efficiently manage the creation and dissemination of these huge corpora, a web-based annotation tool (with a reduced stand-alone version), ILCIANN (Indian Languages Corpora Initiative Annotation Tool), has been developed. It was primarily developed for POS annotation, as well as for managing the annotation work of people with differing levels of competence located at sites physically far apart. To maintain consistency and standards in corpus creation, it was necessary for everyone to work on the common platform provided by this tool.
We propose an ensemble approach to predict the labels in linear programming word problems. Entity identification and meaning representation are the two tasks to be solved in the NL4Opt competition. For the first task, we propose the ensembleCRF method to identify the named entities; in our analysis, we found that single models did not improve on this task. A set of prediction models predicts the entities, and the generated results are combined to form a consensus result in the ensembleCRF method. For the second task, we present an ensemble text generator to produce the representation sentences. Because the output can overflow, we divide the problem into multiple smaller tasks. A single model generates different representations based on the prompt, and all the generated text is combined to form an ensemble and produce the mathematical meaning of a linear programming problem.
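The abstract does not specify how the ensembleCRF consensus is formed; a minimal per-token majority-vote sketch (all function names and label strings here are hypothetical) might look like the following.

```python
from collections import Counter

def ensemble_ner(predictions):
    """Combine per-token label predictions from several NER models into a
    single consensus sequence by majority vote.

    predictions: list of label sequences, one per model, all the same length.
    """
    consensus = []
    for token_labels in zip(*predictions):
        label, _count = Counter(token_labels).most_common(1)[0]
        consensus.append(label)
    return consensus

# Hypothetical outputs from three models for one sentence.
model_outputs = [
    ["B-VAR", "I-VAR", "O", "B-LIMIT"],
    ["B-VAR", "I-VAR", "O", "O"],
    ["B-VAR", "O",     "O", "B-LIMIT"],
]
print(ensemble_ner(model_outputs))  # ['B-VAR', 'I-VAR', 'O', 'B-LIMIT']
```

A real system would also need to repair invalid BIO transitions that a plain per-token vote can produce.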
Datacenter operators ensure fair and regular server maintenance by using automated processes to schedule maintenance jobs to complete within a strict time budget. Automating this scheduling problem is challenging because maintenance job duration varies based on both job type and hardware. While it is tempting to use prior machine learning techniques for predicting job duration, we find that the structure of the maintenance job scheduling problem creates a unique challenge. In particular, we show that prior machine learning methods that produce the lowest error predictions do not produce the best scheduling outcomes due to asymmetric costs. Specifically, underpredicting maintenance job duration results in more servers being taken offline and longer server downtime than overpredicting maintenance job duration. The system cost of underprediction is much larger than that of overprediction. We present Acela, a machine learning system for predicting maintenance job duration, which uses quantile regression to bias duration predictions toward overprediction. We integrate Acela into a maintenance job scheduler and evaluate it on datasets from large-scale, production datacenters. Compared to machine learning-based predictors from prior work, Acela reduces the number of servers that are taken offline by 1.87-4.28X, and reduces the server offline time by 1.40-2.80X.
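Acela's core idea, biasing duration predictions toward overprediction via quantile regression, can be illustrated with off-the-shelf tooling; this is a sketch, not the authors' implementation, and the features and data below are toy stand-ins.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Toy stand-in features/durations; the real system uses job-type and
# hardware features from production datacenters.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))                                        # hypothetical job/hardware features
y = np.exp(X @ [0.5, 0.2, 0.0, 0.1]) + rng.gamma(2.0, 1.0, size=1000) # hypothetical durations

# Predict the 90th percentile of duration rather than the mean, so the
# model errs toward overprediction (the cheaper mistake for scheduling).
model = GradientBoostingRegressor(loss="quantile", alpha=0.9)
model.fit(X, y)

print(model.predict(X[:5]))
```

Raising `alpha` pushes predictions further toward overprediction, trading extra scheduled slack for fewer underpredicted jobs.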
We study the expressibility and learnability of convex optimization solution functions and their multi-layer architectural extension. The main results are: \emph{(1)} the class of solution functions of linear programming (LP) and quadratic programming (QP) is a universal approximant for the $C^k$ smooth model class or some restricted Sobolev space, and we characterize the rate-distortion, \emph{(2)} the approximation power is investigated through a viewpoint of regression error, where information about the target function is provided in terms of data observations, \emph{(3)} compositionality in the form of a deep architecture with optimization as a layer is shown to reconstruct some basic functions used in numerical analysis without error, which implies that \emph{(4)} a substantial reduction in rate-distortion can be achieved with a universal network architecture, and \emph{(5)} we discuss the statistical bounds of empirical covering numbers for LP/QP, as well as a generic optimization problem (possibly nonconvex) by exploiting tame geometry. Our results provide the \emph{first rigorous analysis of the approximation and learning-theoretic properties of solution functions} with implications for algorithmic design and performance guarantees.
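For concreteness, the solution functions studied here can be written as parametric optimization maps; one standard way to state the QP case (notation ours, not necessarily the paper's) is

\[
s(\theta) \;=\; \operatorname*{arg\,min}_{x \in \mathbb{R}^n} \; \tfrac{1}{2}\, x^\top Q(\theta)\, x \;+\; q(\theta)^\top x
\quad \text{subject to} \quad A(\theta)\, x \le b(\theta),
\]

with the LP case recovered by $Q(\theta)=0$. The multi-layer architectural extension composes such maps, $f = s_L \circ s_{L-1} \circ \cdots \circ s_1$, each layer's problem data being generated from the previous layer's output.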
Recovery of true color from underwater images is an ill-posed problem. This is because the wide-band attenuation coefficients for the RGB color channels depend on object range, reflectance, etc., which are difficult to model. Also, there is backscattering due to suspended particles in water. Thus, most existing deep-learning-based color restoration methods, which are trained on synthetic underwater datasets, do not perform well on real underwater data. This can be attributed to the fact that synthetic data cannot accurately represent real conditions. To address this issue, we use an image-to-image translation network to bridge the gap between the synthetic and real domains by translating images from the synthetic underwater domain to the real underwater domain. Using this multimodal domain adaptation technique, we create a dataset that can capture a diverse array of underwater conditions. We then train a simple but effective CNN-based network on our domain-adapted dataset to perform color restoration. Code and pre-trained models can be accessed at https://github.com/nehamjain10/TRUDGCR
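The translate-then-train pipeline can be sketched in a few lines; the modules below are hypothetical placeholders, not the actual translation network or restoration CNN from the repository above.

```python
import torch
import torch.nn as nn

# 1) A frozen synthetic-to-real translator bridges the domain gap.
# 2) A small CNN is trained for color restoration on the translated images.
translator = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                           nn.Conv2d(16, 3, 3, padding=1))   # stand-in translation network
restorer = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(32, 3, 3, padding=1))     # stand-in restoration CNN
optimizer = torch.optim.Adam(restorer.parameters(), lr=1e-4)

synthetic_batch = torch.rand(4, 3, 64, 64)   # synthetic underwater images
clean_batch = torch.rand(4, 3, 64, 64)       # corresponding ground-truth colors

with torch.no_grad():
    domain_adapted = translator(synthetic_batch)  # synthetic -> "real" underwater domain

loss = nn.functional.l1_loss(restorer(domain_adapted), clean_batch)
loss.backward()
optimizer.step()
print(loss.item())
```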
Consider two brands that want to jointly test alternate web experiences for their customers with an A/B test. Such collaborative tests are today enabled using \textit{third-party cookies}, where each brand has information on the identity of visitors to another website. With the imminent elimination of third-party cookies, such A/B tests will become untenable. We propose a two-stage experimental design, where the two brands only need to agree on high-level aggregate parameters of the experiment to test the alternate experiences. Our design respects the privacy of customers. We propose an estimator of the Average Treatment Effect (ATE), show that it is unbiased, and theoretically compute its variance. Our demonstration describes how a marketer for a brand can design such an experiment and analyze the results. On real and simulated data, we show that the approach provides a valid estimate of the ATE with low variance and is robust to the proportion of visitors overlapping across the brands.
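As background for the estimator, the simplest (non-private, single-brand) ATE estimate is a difference in means with a Neyman-style variance estimate; the paper's two-stage design generalizes this to the collaborative, cookie-free setting. A generic sketch:

```python
import numpy as np

def ate_difference_in_means(y_treat, y_ctrl):
    """Difference-in-means estimate of the Average Treatment Effect (ATE)
    and a standard variance estimate. Illustrative only: the paper's
    estimator additionally handles cross-brand visitor overlap."""
    y_treat, y_ctrl = np.asarray(y_treat, float), np.asarray(y_ctrl, float)
    ate = y_treat.mean() - y_ctrl.mean()
    var = y_treat.var(ddof=1) / len(y_treat) + y_ctrl.var(ddof=1) / len(y_ctrl)
    return ate, var

# Hypothetical outcomes (e.g., conversion values) for visitors in each arm.
rng = np.random.default_rng(1)
treated = rng.normal(1.2, 1.0, size=500)
control = rng.normal(1.0, 1.0, size=500)
print(ate_difference_in_means(treated, control))
```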
Spiking Neural Networks (SNNs) are bio-plausible models that hold great potential for realizing energy-efficient implementations of sequential tasks on resource-constrained edge devices. However, commercial edge platforms based on standard GPUs are not optimized to deploy SNNs, resulting in high energy and latency. While analog In-Memory Computing (IMC) platforms can serve as energy-efficient inference engines, they are encumbered by the immense energy, latency, and area requirements of high-precision ADCs (HP-ADC), overshadowing the benefits of in-memory computations. We propose a hardware/software co-design methodology to deploy SNNs into an ADC-Less IMC architecture using sense-amplifiers as 1-bit ADCs, replacing conventional HP-ADCs and alleviating the above issues. Our proposed framework incurs minimal accuracy degradation by performing hardware-aware training and is able to scale beyond simple image classification tasks to more complex sequential regression tasks. Experiments on complex tasks of optical flow estimation and gesture recognition show that progressively increasing the hardware awareness during SNN training allows the model to adapt and learn the errors due to the non-idealities associated with ADC-Less IMC. Also, the proposed ADC-Less IMC offers significant energy and latency improvements, $2-7\times$ and $8.9-24.6\times$, respectively, depending on the SNN model and the workload, compared to HP-ADC IMC.
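One way to picture the hardware-aware training idea is to model the sense amplifier as a 1-bit quantizer with a straight-through estimator, so training can see the ADC-Less non-ideality; this is an illustrative sketch, not the authors' simulator.

```python
import torch

class OneBitADC(torch.autograd.Function):
    """Sense-amplifier acting as a 1-bit ADC: the forward pass binarizes the
    analog partial sum, and the backward pass uses a straight-through
    estimator so gradients still flow during hardware-aware training."""

    @staticmethod
    def forward(ctx, analog_sum):
        return (analog_sum >= 0).float() * 2.0 - 1.0   # output in {-1, +1}

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output                              # straight-through gradient

def adc_less_crossbar(inputs, weights):
    # Analog MAC in the crossbar followed by 1-bit sensing.
    partial_sums = inputs @ weights
    return OneBitADC.apply(partial_sums)

x = torch.randn(8, 64)
w = torch.randn(64, 10, requires_grad=True)
out = adc_less_crossbar(x, w)
out.sum().backward()                 # gradients reach w thanks to the STE
print(out.shape, w.grad.shape)
```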
Decentralized learning algorithms enable the training of deep learning models over large distributed datasets generated at different devices and locations, without the need for a central server. In practical scenarios, the distributed datasets can have significantly different data distributions across the agents. The current state-of-the-art decentralized algorithms mostly assume the data distributions to be Independent and Identically Distributed (IID). This paper focuses on improving decentralized learning over non-IID data distributions with minimal compute and memory overhead. We propose Neighborhood Gradient Clustering (NGC), a novel decentralized learning algorithm that modifies the local gradients of each agent using self- and cross-gradient information. In particular, the proposed method replaces the local gradient of the model with a weighted mean of the self-gradients, the model-variant cross-gradients (derivatives of the received neighbor model parameters with respect to the local dataset), and the data-variant cross-gradients (derivatives of the local model with respect to its neighbors' datasets). Furthermore, we propose CompNGC, a compressed version of NGC that reduces the communication overhead by $32\times$ by compressing the cross-gradients. We demonstrate the empirical convergence and efficiency of the proposed techniques over non-IID data distributions sampled across various model architectures and graph topologies. Our experiments show that NGC and CompNGC outperform the existing state-of-the-art (SoTA) decentralized learning algorithms over non-IID data by $1-5\%$, with significantly lower compute and memory requirements. In addition, we show that the proposed NGC method provides a $5-40\%$ improvement with no additional communication.
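The gradient-mixing rule at the heart of NGC can be sketched as a weighted mean of the three gradient types; the weights below are hypothetical placeholders, not the values used in the paper.

```python
import numpy as np

def ngc_update(self_grad, model_variant_grads, data_variant_grads,
               alpha=0.5, beta=0.25, gamma=0.25):
    """Illustrative sketch of the NGC mixing rule: replace the local gradient
    with a weighted mean of the self-gradient, the model-variant
    cross-gradients (neighbor models evaluated on local data), and the
    data-variant cross-gradients (local model evaluated on neighbor data)."""
    mv = np.mean(model_variant_grads, axis=0)
    dv = np.mean(data_variant_grads, axis=0)
    return alpha * self_grad + beta * mv + gamma * dv

# Toy gradients for one agent with two neighbors.
g_self = np.ones(4)
g_model_variant = [np.full(4, 0.8), np.full(4, 1.2)]  # neighbor models on local data
g_data_variant = [np.full(4, 0.9), np.full(4, 1.1)]   # local model on neighbor data
print(ngc_update(g_self, g_model_variant, g_data_variant))
```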
Cables are commonplace in homes, hospitals, and industrial warehouses, and are prone to tangling. This paper extends prior work on autonomously untangling long cables by introducing novel uncertainty quantification metrics and actions that interact with the cable to reduce perception uncertainty. We present Sliding and Grasping for Tangle Manipulation 2.0 (SGTM 2.0), a system that autonomously untangles cables approximately 3 meters long with a bilateral robot, using estimates of uncertainty at each step to inform its actions. By interactively reducing uncertainty, SGTM 2.0 reduces the number of state-resetting actions it must take, significantly speeding up run time. Experiments show that SGTM 2.0 can achieve 83% untangling success on cables with one or two overhand and figure-8 knots, and 70% termination detection success across these configurations, outperforming SGTM 1.0 by 43% in untangling accuracy and by 200% in full rollout speed. Supplementary material, visualizations, and videos can be found at sites.google.com/view/sgtm2.
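The "use uncertainty to inform actions" loop can be caricatured as a simple threshold policy; the threshold and action names below are hypothetical, not SGTM 2.0's actual primitives.

```python
# Minimal decision sketch: if perception uncertainty about the cable state is
# high, take an uncertainty-reducing interaction before attempting an
# untangling move.
UNCERTAINTY_THRESHOLD = 0.3   # hypothetical value

def choose_action(state_estimate, uncertainty):
    if uncertainty > UNCERTAINTY_THRESHOLD:
        return "reveal_move"      # interact with the cable to disambiguate perception
    if state_estimate == "untangled":
        return "terminate"
    return "untangle_move"        # attempt an untangling primitive on the detected knot

print(choose_action("knot_detected", uncertainty=0.45))  # -> "reveal_move"
```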
Deep ensembles have been shown to extend the positive effects seen in typical ensemble learning to neural networks and to reinforcement learning (RL). However, there is still much to be done to improve the efficiency of such ensemble models. In this work, we present Diverse Ensembles for Fast Transfer in RL (DEFT), a new ensemble-based method for reinforcement learning in highly multimodal environments with improved transfer to unseen environments. The algorithm is split into two main phases: training of the ensemble members, and synthesis (or fine-tuning) of the ensemble members to act in a new environment. The first phase involves training regular policy gradient or actor-critic agents in parallel, but with an added loss that encourages these policies to differ from each other. This causes the individual unimodal agents to explore the space of optimal policies and to capture more of the multimodality of the environment than a single actor could. The second phase of DEFT involves synthesizing the component policies into a new policy that works well in a modified environment, in one of two ways. To evaluate the performance of DEFT, we start with a base version of the Proximal Policy Optimization (PPO) algorithm and extend it with the DEFT modifications. Our results show that the pretraining phase is effective at producing diverse policies in multimodal environments. DEFT often converges to a high reward significantly faster than alternatives such as random initialization without DEFT and fine-tuning of ensemble members. While there is certainly more work to be done to analyze DEFT theoretically and extend it to be more robust, we believe it provides a strong framework for capturing multimodality in environments while still using RL methods with simple policy representations.
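The diversity loss in DEFT's first phase is not specified in this abstract; one natural instantiation is a pairwise-KL bonus between ensemble members' action distributions, sketched below (names and weighting are assumptions).

```python
import torch
import torch.nn.functional as F

def diversity_bonus(action_logits_per_member):
    """Reward ensemble members whose action distributions (on the same batch
    of states) diverge from each other, via mean pairwise KL. Illustrative
    only; the paper's exact diversity loss may differ."""
    dists = [F.log_softmax(l, dim=-1) for l in action_logits_per_member]
    total, pairs = 0.0, 0
    for i in range(len(dists)):
        for j in range(len(dists)):
            if i != j:
                total = total + F.kl_div(dists[j], dists[i],
                                         log_target=True, reduction="batchmean")
                pairs += 1
    return total / pairs

# During phase one, each member's policy-gradient loss would subtract a
# weighted bonus, e.g. loss_i = ppo_loss_i - lam * diversity_bonus(logits).
logits = [torch.randn(32, 6) for _ in range(3)]   # 3 members, 32 states, 6 actions
print(diversity_bonus(logits))
```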